Coding the outputs of multilayer feedforward networks

Authors

  • Mercedes Fernández-Redondo
  • Carlos Hernández-Espinosa
Abstract

In this paper, we present an empirical comparison of four different schemes for coding the outputs of a Multilayer Feedforward network. Results are obtained for eight different classification problems from the UCI repository of machine learning databases. Our results show that the usual codification is superior to the rest when one output unit per class is used. However, using several output units per class can improve generalization performance, and in that case the noisy codification seems more appropriate.
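
As a rough illustration, the sketch below contrasts the two families of schemes the abstract refers to: the usual codification with one output unit per class (one-hot targets) and a codification with several output units per class whose targets are perturbed with noise. The exact schemes compared in the paper are not given in this abstract, so the target layout and noise model here are assumptions.

```python
# A minimal sketch of two output codifications (details are assumptions, not the
# paper's exact schemes): usual one-hot targets vs. several noisy units per class.
import jax.numpy as jnp
from jax import random

def one_hot_targets(labels, n_classes):
    """Usual codification: one output unit per class, target 1 for the true class."""
    return jnp.eye(n_classes)[labels]

def noisy_targets(labels, n_classes, units_per_class, key, noise=0.1):
    """Several output units per class; 0/1 targets jittered away from saturation."""
    base = jnp.repeat(jnp.eye(n_classes)[labels], units_per_class, axis=1)
    jitter = noise * random.uniform(key, base.shape)
    return jnp.where(base == 1.0, base - jitter, base + jitter)

labels = jnp.array([0, 2, 1])
print(one_hot_targets(labels, 3))                        # shape (3, 3)
print(noisy_targets(labels, 3, 2, random.PRNGKey(0)))    # shape (3, 6)
```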


Related articles

Capabilities and Limitations of Feedforward Neural Networks with Multilevel Neurons

This paper proposes a multilevel logic approach to output coding using multilevel neurons in the output layer. Training convergence for a single multilevel perceptron is considered. It has been found that a multilevel neural network classifier with a reduced number of outputs is often able to learn faster and requires fewer weights. Concepts are illustrated with an example of a digit classifier.
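
One way to picture the reduced number of outputs (the level count and encoding below are assumptions for illustration, not taken from the paper): with q admissible activation levels per output neuron, k outputs can distinguish q^k classes, so the ten digit classes fit in two quaternary outputs instead of ten binary ones.

```python
# Hypothetical multilevel output code: a class label written in base `n_levels`
# and scaled into [0, 1], one digit per output neuron.
import jax.numpy as jnp

def multilevel_code(label, n_outputs=2, n_levels=4):
    digits = []
    for _ in range(n_outputs):
        digits.append(label % n_levels)
        label //= n_levels
    return jnp.array(digits[::-1]) / (n_levels - 1)

for digit in range(10):
    print(digit, multilevel_code(digit))   # e.g. 7 -> [1/3, 1.0] (base-4 "13")
```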

The Mixture of Neural Networks Adapted to Multilayer Feedforward Architecture

The Mixture of Neural Networks (MixNN) is a Multi-Net System based on the Modular Approach. MixNN employs a neural network to weight the outputs of the expert networks. The method decomposes the original problem into subproblems, and the final decision is taken using the information provided by the expert networks and the gating network. The neural networks used in MixNN are quite simple, so ...
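
A minimal sketch of this gated combination, assuming a standard softmax gate and single-layer experts since the abstract does not give the exact MixNN equations:

```python
# Softmax-gated mixture (an assumed, simplified form of the MixNN combination):
# the gating network produces one weight per expert, and the system output is
# the weighted sum of the expert outputs.
import jax
import jax.numpy as jnp

def mixture_output(expert_params, gate_params, x):
    experts = jnp.stack([jnp.tanh(W @ x) for W in expert_params])  # (n_experts, n_out)
    gates = jax.nn.softmax(gate_params @ x)                        # (n_experts,)
    return gates @ experts                  # convex combination of expert outputs

keys = jax.random.split(jax.random.PRNGKey(0), 6)
x = jax.random.normal(keys[0], (5,))                               # one input pattern
expert_params = [jax.random.normal(k, (3, 5)) for k in keys[1:5]]  # 4 experts
gate_params = jax.random.normal(keys[5], (4, 5))                   # gating weights
print(mixture_output(expert_params, gate_params, x))               # shape (3,)
```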

Size of Multilayer Networks for Exact Learning: Analytic Approach

This article presents a new result about the size of a multilayer neural network computing real outputs for exact learning of a finite set of real samples. The architecture of the network is feedforward, with one hidden layer and several outputs. Starting from a fixed training set, we consider the network as a function of its weights. We derive, for a wide family of transfer functions, a lower ...

Differentiating Functions of the Jacobian with Respect to the Weights

For many problems, the correct behavior of a model depends not only on its input-output mapping but also on properties of its Jacobian matrix, the matrix of partial derivatives of the model’s outputs with respect to its inputs. We introduce the J-prop algorithm, an efficient general method for computing the exact partial derivatives of a variety of simple functions of the Jacobian of a...
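
The quantity in question can be sketched by brute force with automatic differentiation. Note this is not the J-prop algorithm itself, which computes these derivatives efficiently; the penalty below, the squared Frobenius norm of the Jacobian, is just one assumed example of a simple function of the Jacobian.

```python
# Brute-force autodiff of a scalar function of the input-output Jacobian with
# respect to the weights (illustrating what J-prop computes, not how).
import jax
import jax.numpy as jnp

def model(W, x):
    return jnp.tanh(W @ x)                     # a one-layer model for brevity

def jacobian_penalty(W, x):
    J = jax.jacobian(model, argnums=1)(W, x)   # d(outputs)/d(inputs), shape (3, 5)
    return jnp.sum(J ** 2)                     # squared Frobenius norm of J

W = jax.random.normal(jax.random.PRNGKey(0), (3, 5))
x = jnp.ones(5)
grad_W = jax.grad(jacobian_penalty)(W, x)      # d(penalty)/d(weights)
print(grad_W.shape)                            # (3, 5), same shape as W
```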

Mixture of Neural Networks: Some Experiments with the Multilayer Feedforward Architecture

A Modular Multi-Net System consists of several networks, each of which partially solves a problem: the original problem is decomposed into subproblems and each network focuses on solving one of them. The Mixture of Neural Networks consists of expert networks, which solve the subproblems, and a gating network, which weights the outputs of the expert networks. The expert networks and the gating network...



Publication date: 2001